Patent abstract:
Application performance measurement and reporting. Systems and methods related to performance measurement and reporting are described herein. In one method, a first application is profiled in a first profiling session to generate a first parameter data set related to the performance of the first application's segments. A session report is created based on the first parameter data set. In addition, based in part on the session report, comments are received for one or more of the application's segments. A consolidated report is then generated based on the first parameter data set and the comments.
Publication number: BR102012006254B1
Application number: R102012006254-2
Filing date: 2012-03-20
Publication date: 2020-03-10
Inventors: Swarup CHATTERJEE; Kallol Saha Chowdhury; Somnath Sengupta
Applicant: Tata Consultancy Services Limited
IPC main class:
Patent description:

“METHOD AND SYSTEM FOR MEASUREMENT AND REPORTING APPLICATION PERFORMANCE”

Technical Field

The present subject matter relates to software profiling and, in particular but not exclusively, to systems and methods for performance measurement and reporting for software applications.
Background of the Invention

The performance measurement of a software application is generally performed with the objective of determining the execution performance of the various components of the software application under various workload conditions. Performance measurement can be performed at various stages in the life cycle of the software application, including during development, testing, profiling and monitoring. It is also used to validate and verify other attributes of the software application, such as scalability, reliability and resource usage. Generally, the performance of the software application is measured based on various performance parameters, for example, memory statistics, processor statistics, network statistics, thread statistics, response time, etc. Such performance measurement is usually carried out by an application team and other stakeholders. The user who performs the performance measurement then analyzes the measurement results and can make suggestions to the application team. The suggestions may relate to various aspects of the application, such as the hardware configuration and the runtime conditions, among others. Based on the measurement results and suggestions, the application team can take further action as needed. Sometimes, implementing a suggestion to improve one performance parameter can degrade the performance of the application with respect to another performance parameter. Therefore, a user may again measure the various performance parameters and send suggestions for improving performance. This cycle of providing suggestions and measuring the performance of a software application generally continues until the performance parameters of the software application fall within a previously specified acceptable range.
Summary

This summary is presented to introduce concepts related to systems and methods for performance measurement and reporting, and the concepts are further described below in the detailed description. This summary is not intended to identify essential aspects of the claimed subject matter, nor should it be used to determine or limit the scope of the claimed subject matter.
In an implementation, a parameter data set related to segments profiled in an application is generated, and a session report is created based on the parameter data set. Comments are received for one or more of the segments, based in part on the session report. A consolidated report is generated based on the parameter data set and the comments.
Brief Description of the Drawings

The detailed description is presented with reference to the attached figures. In the figures, the leftmost digit(s) of a reference number identify the figure in which the reference number appears for the first time. The same numbers are used throughout the drawings to designate similar aspects and components. Fig. 1 illustrates an implementation of a performance measurement and reporting system, according to an embodiment of the present subject matter. Fig. 2 illustrates a method for measuring and reporting performance, according to an embodiment of the present subject matter. Fig. 3 illustrates a method for measuring and reporting comparative performance for an application, according to another implementation of the present subject matter.
Detailed Description

Systems and methods for measuring and reporting performance are described herein. The systems and methods can be implemented in a variety of computing systems. Computing systems that can implement the described method(s) include, without limitation, mainframe computers, workstations, personal computers, desktop computers, minicomputers, servers, multiprocessor systems, laptops and the like. The performance measurement of a software application is performed for a variety of reasons, including quality assurance and to verify that the application meets commercial, functional and technical requirements and works as expected. The software application is hereinafter also referred to simply as an application. Performance measurement can be performed at any stage in the life cycle of an application, including during development, testing, profiling and monitoring. Generally, performance is measured based on various performance parameters, such as processor utilization, memory utilization, network utilization, input-output utilization, database query statistics, response time statistics, session-related statistics, processing speed statistics, process thread statistics and queue-related statistics. It will be clear that this is only an indicative list of performance parameters and is not intended to be exhaustive. The user who performs the performance measurement then analyzes the measurement results and can make suggestions, for example, to the application team if the performance parameters are above or below a previously specified range. The suggestions can relate, for example, to the application, to the hardware or to the runtime conditions. For this purpose, the performance parameters are usually recorded as screen snapshots and the suggestions are provided in a separate text or spreadsheet document. Based on the performance measurement results and the user's suggestions, appropriate actions can be taken to bring the parameters within the previously specified acceptable range. Sometimes, an action taken to improve one performance parameter can degrade the performance of the application with respect to another performance parameter. For example, in an attempt to improve the application's response time, the application's memory usage may have increased. Therefore, the application can again be profiled in a different session, measurements of various performance parameters can be recorded, and suggestions can be made for further modifications, if necessary. It will be clear that a profiling session refers to a period of time during which the application's performance is measured. This cycle of measuring application performance and making suggestions generally continues iteratively until the application's performance parameters fall within the previously specified acceptable range.
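By way of illustration only (this code is not part of the patent disclosure; it is a minimal Python sketch, and all names in it are assumptions), the following shows how two of the performance parameters listed above, response time and memory usage, might be sampled for a single executed segment during a profiling session using only the standard library:

```python
import time
import tracemalloc

def profile_segment(segment, *args, **kwargs):
    """Run one application segment (any callable) and sample two
    performance parameters: response time and peak memory usage."""
    tracemalloc.start()
    start = time.perf_counter()
    result = segment(*args, **kwargs)          # execute the segment
    elapsed = time.perf_counter() - start      # response time statistic
    _, peak = tracemalloc.get_traced_memory()  # memory statistic, in bytes
    tracemalloc.stop()
    return result, {"segment": segment.__name__,
                    "response_time_s": elapsed,
                    "peak_memory_bytes": peak}
```

In an actual profiling session, samples of this kind would be collected for every segment executed over the session duration.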
As mentioned above, conventionally, suggestions made after performance measurement are not systematically recorded or stored, thus making it difficult to diagnose performance problems at a later stage. Furthermore, to compare an application's performance between two profiling sessions, screen snapshots need to be compared manually. Profiling sessions can differ in several ways, such as in application versions, hardware configuration, runtime conditions, session periods and session durations. The term “session period” is used to refer to the time, during the execution of the application, at which the profiling session is carried out. For example, the profiling session can be performed as soon as the application starts running or after the application has been running for a certain amount of time. The term “session duration” is used to refer to the length of time over which the profiling session takes place. Since there may be several reasons why performance measurements differ across profiling sessions, as discussed herein, it is difficult to analyze performance issues manually.
For example, consider a case where the results of two profiling sessions are compared, where the profiling sessions correspond to the performance measurement of different versions of a software application. If there is a performance problem related to memory usage in the eleventh version of the software, it would be difficult to analyze from which previous version the performance problem may have arisen, what actions were taken in response to it, and so on. In the example above, an optimization of processor utilization in the third version may have been the root cause of an increase in memory utilization and, over the various optimizations in subsequent versions, memory usage may have exceeded the previously specified acceptable range in the eleventh version.
Using conventional performance measurement systems and methods, it would be very difficult and laborious to detect the cause of the performance problem in the application, as there is no direct way to compare the performance measurement results of two or more profiling sessions simultaneously. In addition, in the absence of documentation of the suggestions and the corresponding actions taken, it is difficult to determine the cause of the performance problem.
In another example, the performance of an application may vary based on the session period, that is, the time at which the profiling session took place and measurements were taken. For example, if an application uses a caching technique for fast data retrieval, the application's performance would vary over time as the cache memory fills, and it would reach a steady state only after the application has been running for a certain time. In such a case, comparing the performance measurement results of two profiling sessions that differ in session period also requires a tedious manual comparison.
In yet another example, an application's performance may vary due to the runtime conditions under which performance is measured. For example, performance could vary based on the operating environment used, the size of the input data and the number of iterations to be performed, among others. In such a scenario, again, manual comparison of performance measurements over multiple sessions with different runtime conditions is tedious.
To address this, systems and methods for measuring and reporting application performance are proposed. In one embodiment of the proposed performance measurement and reporting system, the system allows users, such as the various stakeholders, to record comments and suggestions for individual segments of the software application and to associate the comments with the corresponding segments. The system is also configured to generate consolidated measurement and analysis results and reports, including user comments, in various file formats. These results and measurement reports can be used by other stakeholders in the application. In addition, various actions can be taken based on the recorded recommendations to change or improve the performance of the application.
In an implementation, the system also facilitates the comparison of performance parameters across different profiling sessions. For example, the system identifies segments common to a first application profiled in a first session and a second application profiled in a second session, compares the performance parameters corresponding to the common segments and generates a comparison report. It will be understood that the first and second applications can be the same version or different versions of an application. Similarly, the first and second profiling sessions may have the same configuration, for example, with respect to hardware configuration or runtime conditions, or they may differ in configuration. The comparison report can include parameters such as segment execution count, total cumulative response time and average response time for the common segments. The system thus facilitates the comparison of measurement results from two or more sessions simultaneously, and helps to diagnose the source of performance problems in the software application.
Although aspects of systems and methods for performance measurement and reporting can be implemented in any number of different computing systems, environments and/or configurations, the embodiments are described in the context of the following example system architecture(s).
Exemplary Systems

Fig. 1 illustrates a performance measurement and reporting system 102 for measuring the performance of an application, according to an embodiment of the present subject matter. The performance measurement and reporting system 102 is hereinafter also called the PMR system 102. The PMR system 102 can be implemented as any of a variety of conventional computing devices, including a server, a personal desktop computer, a notebook, a portable computer, a workstation, a mainframe computer, a mobile computing device or a laptop.
In said embodiment, the PMR system 102 includes one or more processors 104, Input/Output (I/O) interfaces 106, network interfaces 108 and a memory 110 coupled to the processor 104. The processor 104 can be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitry and/or any devices that manipulate signals based on operating instructions. Among other capabilities, the processor 104 is configured to fetch and execute computer-readable instructions and data stored in the memory 110.
The I/O interfaces 106 can include a variety of software and hardware interfaces, for example, an interface for peripheral devices, such as a display unit, a keyboard, a mouse, an external memory, a printer, etc. The network interfaces 108 can allow the PMR system 102 to communicate with other computing devices and peripheral devices, such as web servers and external databases, over a network. The network interfaces 108 can facilitate multiple communications across a wide variety of networks and protocols, including wired networks, for example, LAN, cable, etc., and/or wireless networks, for example, WLAN, cellular, satellite, etc.
In an implementation, the PMR system 102 can be connected to a public network or a corporate network including multiple office personal computers, laptops, multiple servers, such as blade servers, databases or other computing devices. In another implementation, the PMR system 102 can be connected to a private network or to a home network with a limited number of personal computers and laptops. In yet another implementation, the PMR system 102 can be a stand-alone system. The memory 110 may include any computer-readable medium known in the art, including, for example, volatile memory, such as static random access memory (SRAM) and dynamic random access memory (DRAM), and/or non-volatile memory, such as read-only memory (ROM), erasable programmable ROM, flash memories, hard disks, optical discs and magnetic tapes. The memory 110 includes program modules 112 and program data 114.
The program modules 112 include routines, programs, objects, data structures, etc., that perform specific tasks or implement specific abstract data types. The program modules 112 additionally include a profiling module 116, a report module 118, a comparison module 120 and other modules 122. The other modules 122 may include programs or coded instructions that supplement the applications and functions of the PMR system 102, for example, programs of the operating system.
The program data 114, among other things, serves as a repository for storing data processed, received and generated by one or more of the program modules 112. The program data 114 includes performance data 124, report data 126, comparison data 128 and other data 130. The other data 130 can include data generated as a result of the execution of one or more modules in the program modules 112, such as the other modules 122. In addition, the program data 114 can include applications 132-1 ... 132-n, as will be explained below.
In operation, the profiling module 116 can access one or more of the applications 132-1, 132-2, ..., 132-n, collectively designated as applications 132, for performance measurement. The profiling module 116 can receive the applications 132 via the I/O interfaces 106, such as from a compact disk (CD) drive, or the applications 132 can be accessed via a network or from the program data 114. For purposes of illustration, the applications 132 have been illustrated as being stored in the program data 114. It will be understood, however, that the applications may alternatively be part of the program modules 112 or may be hosted on an external storage device or another computing device.
In an implementation, the profiling module 116 accesses an application, for example, the application 132-1, for performance measurement and reporting. The application 132-1 typically includes multiple segments, such as methods, procedures, joins and other pieces of code. The profiling module 116 then profiles the application 132-1 in a profiling session, say, a first profiling session. It will be clear that a profiling session refers to a period of time during which an application's performance is measured.
To profile the application 132-1, the profiling module 116 runs the application 132-1 under various conditions and measures performance parameters for the segments executed in the first profiling session to obtain a parameter data set. The performance parameters can include one or more metrics, such as CPU statistics, memory statistics, response time statistics, session-related statistics, processing speed statistics, thread statistics, statistics related to system processes and queue-related statistics. It will be clear that this is only an indicative list of performance parameters, without the intention of being exhaustive or limiting. The profiling module 116 can store the measured parameter data set corresponding to the different segments of the application 132-1 in the performance data 124. The report module 118 then generates a session report that provides a summary of the parameter data set for the different segments. For example, the session report can include the segment execution count, the cumulative response time, the average response time for each segment, etc. The session report can be stored in the report data 126.
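As a hedged illustration of these two steps (not the patented implementation; the class and field names are hypothetical), a profiling session that accumulates a per-segment parameter data set and then summarizes it into a session report could be sketched in Python as:

```python
from collections import defaultdict

class ProfilingSession:
    """Sketch of a profiling session: stores one response-time sample per
    segment execution, then summarizes the samples into a session report."""

    def __init__(self, name):
        self.name = name
        self.samples = defaultdict(list)  # segment name -> response times

    def record(self, segment_name, response_time):
        self.samples[segment_name].append(response_time)

    def session_report(self):
        # Per-segment execution count, cumulative response time and
        # average response time, mirroring the summary described above.
        return {
            seg: {
                "execution_count": len(times),
                "cumulative_response_time": sum(times),
                "average_response_time": sum(times) / len(times),
            }
            for seg, times in self.samples.items()
        }
```

For example, calling session.record("load_data", 0.42) after each execution of a segment named load_data would be enough for session_report() to produce the per-segment summary.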
In addition, the report module 118 provides the session report to a user via a user interface, for example, a display device. In another case, the report module 118 can also provide the session report as an output file or in any other format, such as portable document format (PDF), spreadsheets, tabular format, graphics or any other user-defined format. In yet another case, in the session report, the report module 118 can highlight, or use any other technique to indicate, the segments for which one or more performance parameters are outside a previously specified range. The previously specified range can be pre-stored, for example, in the performance data 124, or it can be supplied by the user, for example, before profiling. It will be understood that the previously specified range may vary from session to session based on the application being profiled and the technical specifications of the application.
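A minimal sketch of this highlighting step, assuming the session-report structure from the previous sketch and a hypothetical per-segment (low, high) acceptable range, might look like:

```python
def flag_out_of_range(report, acceptable_ranges):
    """Return the segments whose average response time falls outside a
    previously specified (low, high) range, for highlighting in the report."""
    flagged = {}
    for seg, stats in report.items():
        low, high = acceptable_ranges.get(seg, (0.0, float("inf")))
        if not low <= stats["average_response_time"] <= high:
            flagged[seg] = stats
    return flagged
```

The flagged segments could then be rendered with highlighting, or written out in one of the output formats mentioned above.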
After the session report has been provided to the user, the report module 118 can prompt the user to make comments or recommendations to change or improve the performance of one or more of the segments of the application 132-1. The comments can relate to any of the hardware configuration, the runtime conditions, a part of the first application, the session period and the session duration.
The user comments received via the I/O interfaces 106 are associated with the corresponding segments and saved with the parameter data set in the performance data 124. In an implementation, multiple users can comment on the same session report, and the various comments can be saved with the parameter data set. In such a case, the comments provided by the various users can also be marked using, for example, a username, color coding, etc. In addition, comments received from users can be visually differentiated based on aspects such as the user name and a network ID. The report module 118 then generates a consolidated report including both the parameter data set and the recorded comments. The consolidated report is also stored in the report data 126, and can then be used, for example, by the application team working on the application 132-1.
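To make the association concrete (again only a sketch with hypothetical names, not the disclosed implementation), per-segment comments tagged with the commenting user could be stored alongside the parameter data set and merged into the consolidated report:

```python
class SegmentComments:
    """Sketch: comments keyed by segment name and tagged with the user who
    made them, so several users can annotate the same session report."""

    def __init__(self):
        self.by_segment = {}  # segment name -> list of (user, comment) pairs

    def add(self, segment, user, text):
        self.by_segment.setdefault(segment, []).append((user, text))

def consolidated_report(report, comments):
    # Merge the parameter data set with the recorded comments per segment.
    return {
        seg: {**stats, "comments": comments.by_segment.get(seg, [])}
        for seg, stats in report.items()
    }
```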
Based on the session report and the user comments, various actions can be taken, for example, actions related to the code of the application 132-1, the hardware configuration, the runtime conditions, the session period and the session duration. The application is then resubmitted for profiling, for example, as the application 132-2. It will be understood that the application 132-2 can be the same version as or a different version of the application 132-1. The profiling module 116 again profiles the application 132-2 in a second session and generates a second parameter data set, as described above. Based on the decided actions, the second profiling session may differ from the first profiling session in one or more of the hardware configuration, the runtime conditions, the session period and the session duration. In addition, as previously mentioned, the report module 118 can generate the session report and, if necessary, the consolidated report including user comments for the application 132-2. This process of profiling and taking actions can be carried out iteratively until the application's performance conforms to the specified requirements.
In an implementation, the user can choose to directly compare the parameter data sets generated in the two profiling sessions during the profiling of the two applications 132-1 and 132-2. In this case, the comparison module 120 identifies segments common to the applications 132-1 and 132-2, and compares the parameter data sets for the common segments. The report module 118 then generates a comparison report for the common segments based on the compared parameter data sets.
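Identifying the common segments reduces to a set intersection over the two parameter data sets. The following sketch (illustrative only, reusing the assumed session-report structure from above) compares the average response time of each common segment across the two sessions:

```python
def compare_sessions(report_a, report_b):
    """Sketch: compare two session reports over their common segments."""
    common = report_a.keys() & report_b.keys()  # segments profiled in both
    return {
        seg: {
            "first_session": report_a[seg]["average_response_time"],
            "second_session": report_b[seg]["average_response_time"],
            "changed": report_a[seg]["average_response_time"]
                       != report_b[seg]["average_response_time"],
        }
        for seg in common
    }
```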
In addition, the report module 118 can selectively highlight parts of the parameter data sets that are unfavorable or that have changed in one session compared to the other. For example, the first session, corresponding to the application 132-1, may have higher memory usage than the second session, corresponding to the application 132-2, but may have a shorter response time. Therefore, in the comparison report, the report module 118 can highlight the memory usage statistics for the first session and the response time statistics for the second session for immediate reference by the user.
In another implementation, a user can choose to compare the profiling results of multiple sessions, which can correspond to the profiling of different versions of an application, or different hardware configurations, different runtime conditions, different session periods, different session durations or a combination of these. In one case, the comparison module 120 identifies segments common to the applications profiled in the multiple sessions and compares the parameter data sets for the common segments. In another case, the comparison module 120 identifies the common segments and asks the user to indicate which of the common segments need to be compared. The comparison module 120 then compares the parameter data sets for the indicated common segments.
After the comparison module 120 compares the parameter data sets, the report module 118 can generate a comparison report in a manner similar to that discussed previously. The comparison report can also include comparative graphs and selective highlighting of the parts of the parameter data sets that have improved, deteriorated or changed over the course of the multiple sessions. In an implementation, the comparison report can also include a memory graph so that a user can correlate changes in the parameter data set with the application's memory usage. In addition, if one or more of the multiple sessions were profiled previously by the profiling module 116 and user comments were recorded, the comparison report can include the comments associated with the common segments in one or more of the previous sessions. Thus, a user can directly compare the profiling results of multiple sessions, study the previous comments and analyze the effect of the various actions that were carried out over the course of the sessions.

Fig. 2 illustrates a method 200 for measuring and reporting performance, according to an implementation of the present subject matter. Fig. 3 illustrates a method 300 for measuring and reporting comparative performance, according to another implementation of the present subject matter.
The methods 200 and 300 can be described in the general context of computer-executable instructions. Generally, computer-executable instructions can include routines, programs, objects, data structures, procedures, modules, functions, etc., that perform specific tasks or implement specific abstract data types. The methods can also be practiced in a distributed computing environment in which the functions are performed by remote processing devices that are connected via a communications network. In a distributed computing environment, computer-executable instructions can be located on both local and remote computer storage media, including memory storage devices. The order in which the methods are described should not be interpreted as a limitation, and any number of the described method blocks can be combined in any order to implement the methods or alternative methods. In addition, individual blocks can be deleted from the methods without departing from the spirit and scope of the subject matter described herein. Furthermore, the methods can be implemented in any hardware, software, firmware or combination thereof.
Referring to the method 200, in block 202, the performance of an application is profiled in a profiling session and a parameter data set is generated. For example, the profiling module 116 can profile the performance of the application 132-1 by running multiple segments of the application 132-1 and measuring performance parameters under various conditions to obtain the parameter data set. The performance parameters can include metrics such as CPU statistics, memory statistics, response time statistics, session-related statistics, processing speed statistics, thread statistics, statistics related to system processes and queue-related statistics.
In block 204, the parameter data set is summarized and a session report is generated. In an implementation, the report module 118 generates a session report that lists the various segments in the application 132-1 and the corresponding performance parameters, for example, the execution count, the total response time, the average response time, the maximum response time and other statistics. The report module 118 can also highlight segments for which one or more of the performance parameters fall outside a previously specified acceptable range.
In block 206, user comments are received for improving the performance measurements, for example, by modifying one or more of the segments in the application or by changing the hardware configuration, the runtime conditions, the session period or the session duration. For example, the report module 118 can prompt the user to make comments and can receive input, for example, via a keyboard.
In block 208, the user comments are associated with the corresponding segments. In an implementation, the report module 118 associates the user comments with the corresponding segments and stores the associated comments in the performance data 124.
In block 210, a consolidated performance report is generated, which includes both the parameter data set related to the various segments and the user comments. This consolidated report can be used by the various stakeholders, such as software developers, database administrators, project leaders, etc. The consolidated report can be generated by the report module 118.
In addition, performance metrics from multiple sessions, that is, two or more sessions, can be directly compared, for example, as illustrated in an implementation by the method 300.
In block 302, parameter data sets are received from two or more profiling sessions of an application. In an implementation, the profiling module 116 can create an “n” number of sessions and generate parameter data sets for the “n” sessions. The “n” sessions can correspond to the profiling of different versions of an application, different hardware configurations, different runtime conditions, different session periods, different session durations or a combination of these.
In block 304, the common segments among the applications profiled in the multiple sessions are identified, and the parameter data sets for the common segments are compared. In an implementation, the comparison module 120 identifies the common segments.
In block 306, a comparison report is generated using the parameter data sets corresponding to one or more of the common segments. For example, the report module 118 may prompt the user to select one or more of the common segments and/or parameters for comparison. The report module 118 can then generate the comparison report for the selected segments, where the comparison report can include comparative graphs and the selective highlighting of parameters that have changed over the course of the multiple sessions. This comparison report can also include a memory graph so that a user can correlate changes in the parameter data set with the application's memory usage. In addition, if user comments have been recorded for one or more of the multiple sessions, the comparison report can include the comments associated with the selected segments.
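Extending the pairwise comparison shown earlier to “n” sessions (again a sketch under assumed structures, not the patented method), the common segments can be intersected across all of the session reports and one parameter tabulated per session, ready for plotting as comparative graphs:

```python
def compare_many(reports, selected_segments=None):
    """Sketch of blocks 304-306: intersect the segments of n session
    reports and tabulate the average response time per session."""
    common = set(reports[0])
    for report in reports[1:]:
        common &= set(report)
    if selected_segments is not None:  # optional user selection of segments
        common &= set(selected_segments)
    # One row per common segment: its average response time in each session.
    return {
        seg: [report[seg]["average_response_time"] for report in reports]
        for seg in common
    }
```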
Thus, changes in the performance of an application over a period of time can also be effectively tracked and analyzed, thereby assisting in the creation and distribution of an efficient software application that satisfies the various related business, technical and functional requirements.
Although embodiments for performance measurement and reporting have been described in language specific to structural aspects and/or methods, it should be understood that the invention is not necessarily limited to the specific aspects or methods described. Instead, the specific aspects and methods are disclosed as exemplary implementations of systems and methods for performance measurement and reporting.
Claims:
Claims (12)
[1]
1. Computer-implemented method for measuring and reporting application performance, CHARACTERIZED by comprising: generating, in a first profiling session, a first parameter data set, including performance parameters, related to the performance of segments of an application; creating a session report based on the first parameter data set; receiving comments to improve the performance of one or more of the application's segments, based in part on the session report, where the comments relate to at least one of a hardware configuration, runtime conditions, a portion of the first application, a session period and a session duration; associating the received comments with one or more corresponding segments of the application's segments; receiving a second parameter data set corresponding to a second application profiled in a second profiling session; identifying common segments between the first application and the second application; providing a consolidated report based on the first parameter data set and the comments, where the consolidated report includes the first parameter data set and the comments for one or more of the application's segments; and generating a comparison report for at least one of the common segments, based on the first parameter data set and the second parameter data set, where the comparison report comprises comparative graphs and selective highlighting of parameters that changed between the first and second sessions.
[2]
2. Computer-implemented method, according to claim 1, CHARACTERIZED by the fact that the performance parameters comprise at least one of processor utilization, memory utilization, network utilization, input-output utilization, database query statistics, response time statistics, session-related statistics, processing speed statistics, process thread statistics and queue-related statistics.
[3]
3. Computer-implemented method, according to claim 1, CHARACTERIZED by additionally comprising: determining whether a performance parameter in the first parameter data set is outside a pre-specified range; and requesting that one or more users provide comments based on a positive determination.
[4]
4. Computer-implemented method, according to claim 3, CHARACTERIZED by the fact that the comments received from the one or more users are visually differentiated based on at least one of a user name and a network ID.
[5]
5. Computer-implemented method, according to claim 1, CHARACTERIZED by the fact that the second application is at least one of the first application and a modified version of the first application.
[6]
6. Computer-implemented method, according to claim 1, CHARACTERIZED by the fact that the comparison report includes at least one comment received based in part on the session report.
[7]
7. Computer-implemented method, according to claim 1, CHARACTERIZED by the fact that generating the comparison report comprises visually indicating, in the comparison report, the changes in the performance parameters of the parameter data sets corresponding to one or more of the common segments.
[8]
8. System (102) for performance measurement and reporting, the system being CHARACTERIZED in that it comprises: a processor (104); and a memory (110) coupled to the processor (104), the memory (110) comprising: a profiling module (116) configured to profile the performance of a first application and generate a first parameter data set related to the segments executed in the first application; a comparison module (120) configured to receive a second parameter data set corresponding to a second application profiled in a second profiling session and identify common segments between the first application and the second application; and a report module (118) configured to provide a session report based on the first parameter data set; receive comments to improve the performance of one or more of the segments, where the comments relate to at least one of a hardware configuration, runtime conditions, a part of the first application, a session period and a session duration; associate the received comments with one or more of the corresponding segments of the application's segments; and generate a consolidated report based on at least one of the first parameter data set and the comments, where the consolidated report includes the first parameter data set and the comments for one or more of the application's segments, and a comparison report for at least one of the common segments, based on the first parameter data set and the second parameter data set, where the comparison report comprises comparative graphs and selective highlighting of parameters that changed between the first and second sessions.
[9]
9. System, according to claim 8, CHARACTERIZED by the fact that the report module (118) is additionally configured to: determine whether at least one data point in the first parameter data set deviates from a pre-specified range; and indicate the deviation in a session report based on a positive determination.
[10]
10. System (102), according to claim 8, CHARACTERIZED by additionally comprising a comparison module (120) configured to identify common segments between two or more applications.
[11]
11. System (102), according to claim 10, CHARACTERIZED by the fact that the report module (118) is additionally configured to generate a comparison report comparing the performance of one or more of the common segments.
[12]
12. System (102), according to claim 11, CHARACTERIZED by the fact that the report module (118) is additionally configured to: determine whether the performance of one or more of the common segments has changed across the two or more applications; and indicate a change in performance in the comparison report based on a positive determination.
Similar technologies:
Publication number | Publication date | Patent title
BR102012006254B1|2020-03-10|METHOD AND SYSTEM FOR MEASUREMENT AND PERFORMANCE REPORTING OF APPLICATIONS
US9367435B2|2016-06-14|Integration testing method and system for web services
Pinto et al.2017|Energy efficiency: a new concern for application software developers
Sambasivan et al.2011|Diagnosing performance changes by comparing request flows
US20140379669A1|2014-12-25|Feedback Optimized Checks for Database Migration
US11106573B2|2021-08-31|Regression testing of SQL execution plans for SQL statements
US9785432B1|2017-10-10|Automatic developer behavior classification
US20160077828A1|2016-03-17|Logical grouping of profile data
US20150186253A1|2015-07-02|Streamlined performance testing for developers
US10372711B2|2019-08-06|System and method predicting effect of cache on query elapsed response time during application development stage
KR102301946B1|2021-09-13|Visual tools for failure analysis in distributed systems
US9235410B2|2016-01-12|Tracking software package dependencies using a graph model
US20160091948A1|2016-03-31|Providing energy consumption analytics of cloud based service
US9830148B1|2017-11-28|Providing user-specific source code alert information
Ivanov et al.2015|Performance evaluation of enterprise big data platforms with HiBench
US10303517B1|2019-05-28|Automated evaluation of computer programming
BRPI0904262A2|2011-02-01|automated load model
US10346615B2|2019-07-09|Rule driven patch prioritization
US20190129980A1|2019-05-02|Nested controllers for migrating traffic between environments
Miranskyy et al.2018|Database engines: Evolution of greenness
Leite et al.2018|Comparative evaluation of open source business intelligence platforms for SME
US8799873B2|2014-08-05|Collecting tracepoint data
Lu et al.2013|A micro-benchmark suite for evaluating hadoop rpc on high-performance networks
US9880879B1|2018-01-30|Identifying task instance outliers based on metric data in a large scale parallel processing system
Erdelt2020|A framework for supporting repetition and evaluation in the process of cloud-based DBMS performance benchmarking
Patent family:
Publication number | Publication date
JP6072423B2|2017-02-01|
US20130030764A1|2013-01-31|
US9311211B2|2016-04-12|
CN102902621B|2017-04-12|
CN102902621A|2013-01-30|
BR102012006254A2|2013-07-30|
EP2557503A3|2013-05-08|
EP2557503A2|2013-02-13|
EP2557503B1|2020-04-01|
SG187306A1|2013-02-28|
JP2013030151A|2013-02-07|
Cited references:
Publication number | Filing date | Publication date | Applicant | Patent title

GB2324892B|1997-03-14|1999-12-01|Interactuality Limited|Process monitoring system|
US6467052B1|1999-06-03|2002-10-15|Microsoft Corporation|Method and apparatus for analyzing performance of data processing system|
US6567767B1|2000-09-19|2003-05-20|Unisys Corporation|Terminal server simulated client performance measurement tool|
JP2002149449A|2000-11-15|2002-05-24|Hitachi Software Eng Co Ltd|Method for developing program|
JP2003114813A|2001-10-03|2003-04-18|Ibm Japan Ltd|Analysis server, program analysis network system and program analysis method|
US7363543B2|2002-04-30|2008-04-22|International Business Machines Corporation|Method and apparatus for generating diagnostic recommendations for enhancing process performance|
JP4528116B2|2002-06-25|2010-08-18|インターナショナル・ビジネス・マシーンズ・コーポレーション|Method and system for monitoring application performance in a distributed environment|
US6975963B2|2002-09-30|2005-12-13|Mcdata Corporation|Method and system for storing and reporting network performance metrics using histograms|
US6993453B2|2003-10-28|2006-01-31|International Business Machines Corporation|Adjusted monitoring in a relational environment|
US6970805B1|2003-12-23|2005-11-29|Unisys Corporatiion|Analysis of data processing system performance|
US20070260735A1|2006-04-24|2007-11-08|International Business Machines Corporation|Methods for linking performance and availability of information technology resources to customer satisfaction and reducing the number of support center calls|
US7975257B2|2006-06-13|2011-07-05|Microsoft Corporation|Iterative static and dynamic software analysis|
WO2008038389A1|2006-09-28|2008-04-03|Fujitsu Limited|Program performance analyzing apparatus|
JP2010061461A|2008-09-04|2010-03-18|Ricoh Co Ltd|System for automatic evaluation of software performance|
CN101533366A|2009-03-09|2009-09-16|Inspur Electronic Information Industry Co., Ltd.|Method for acquiring and analyzing performance data of server|
US8555259B2|2009-12-04|2013-10-08|International Business Machines Corporation|Verifying function performance based on predefined count ranges|
CN102231130A|2010-01-11|2015-06-17|International Business Machines Corporation|Method and device for analyzing computer system performances|
EP2074505A4|2006-10-05|2010-01-13|Splunk Inc.|Time series search engine|
US9087041B2|2012-07-24|2015-07-21|Michael Weir|Enterprise test system platform and associated method for interoperable test data management, test development, test libraries and test workflow management and automation|
US10225136B2|2013-04-30|2019-03-05|Splunk Inc.|Processing of log data and performance data obtained via an application programming interface |
US10318541B2|2013-04-30|2019-06-11|Splunk Inc.|Correlating log data with performance measurements having a specified relationship to a threshold value|
US10019496B2|2013-04-30|2018-07-10|Splunk Inc.|Processing of performance data and log data from an information technology environment by using diverse data stores|
US10997191B2|2013-04-30|2021-05-04|Splunk Inc.|Query-triggered processing of performance data and log data from an information technology environment|
US10614132B2|2013-04-30|2020-04-07|Splunk Inc.|GUI-triggered processing of performance data and log data from an information technology environment|
US10346357B2|2013-04-30|2019-07-09|Splunk Inc.|Processing of performance data and structure data from an information technology environment|
US10353957B2|2013-04-30|2019-07-16|Splunk Inc.|Processing of performance data and raw log data from an information technology environment|
US9740363B2|2013-10-02|2017-08-22|Velocity Technology Solutions, Inc.|Methods and systems for managing community information|
IN2014MU00819A|2014-03-11|2015-09-25|Tata Consultancy Services Ltd|
CN105096096A|2014-04-29|2015-11-25|阿里巴巴集团控股有限公司|Task performance evaluation method and system|
US9652812B2|2014-07-01|2017-05-16|International Business Machines Corporation|Obtaining software asset insight by analyzing collected metrics using analytic services|
CN104267995B|2014-09-30|2018-10-16|北京金山安全软件有限公司|Processing method, device and the terminal of application program|
US20160378545A1|2015-05-10|2016-12-29|Apl Software Inc.|Methods and architecture for enhanced computer performance|
CN105095076B|2015-07-17|2019-04-12|北京金山安全软件有限公司|Compatibility test method and device between Software Edition|
US9818239B2|2015-08-20|2017-11-14|Zendrive, Inc.|Method for smartphone-based accident detection|
CN108139456B|2015-08-20|2022-03-04|泽安驾驶公司|Method for assisting navigation by accelerometer|
EP3200080B1|2015-12-16|2021-12-22|Tata Consultancy Services Limited|Methods and systems for memory suspect detection|
US10949323B2|2016-08-26|2021-03-16|Hitachi, Ltd.|Application management system, method, and computer program|
CN106357480B|2016-11-23|2020-02-14|北京蓝海讯通科技股份有限公司|Method and device for monitoring network performance of application and mobile terminal|
DE102016223484B4|2016-11-25|2021-04-15|Fujitsu Limited|Determine Similarities in Computer Software Codes for Performance Analysis|
US10304329B2|2017-06-28|2019-05-28|Zendrive, Inc.|Method and system for determining traffic-related characteristics|
US10559196B2|2017-10-20|2020-02-11|Zendrive, Inc.|Method and system for vehicular-related communications|
WO2019104348A1|2017-11-27|2019-05-31|Zendrive, Inc.|System and method for vehicle sensing and analysis|
WO2021113475A1|2019-12-03|2021-06-10|Zendrive, Inc.|Method and system for risk determination of a route|
CN111459815A|2020-03-30|2020-07-28|吉林大学|Real-time computing engine testing method and system|
Legal status:
2013-07-30| B03A| Publication of an application: publication of a patent application or of a certificate of addition of invention|
2018-12-18| B06F| Objections, documents and/or translations needed after an examination request according art. 34 industrial property law|
2019-10-15| B06U| Preliminary requirement: requests with searches performed by other patent offices: suspension of the patent application procedure|
2020-02-04| B09A| Decision: intention to grant|
2020-03-10| B16A| Patent or certificate of addition of invention granted|Free format text: TERM OF VALIDITY: 20 (TWENTY) YEARS COUNTED FROM 2012-03-20, SUBJECT TO THE LEGAL CONDITIONS. |
Priority:
Application number | Filing date | Patent title
IN2149MU2011|2011-07-28|
IN2149/MUM/2011|2011-07-28|